Recent studies on adversarial images have shown that they tend to leave the underlying low-dimensional data manifold, making correct predictions significantly more challenging for current models. This so-called off-manifold conjecture has inspired a novel line of defenses against adversarial attacks on images. In this study, we find that a similar phenomenon occurs in the contextualized embedding space induced by pretrained language models, in which adversarial texts tend to have their embeddings diverge from the manifold of natural ones. Based on this finding, we propose Textual Manifold-based Defense (TMD), a defense mechanism that projects text embeddings onto an approximated embedding manifold before classification. It reduces the complexity of potential adversarial examples, which ultimately enhances the robustness of the protected model. Through extensive experiments, we show that our method consistently and significantly outperforms previous defenses under various attack settings without trading off clean accuracy. To the best of our knowledge, this is the first NLP defense that leverages the manifold structure against adversarial attacks. Our code is available at \url{https://github.com/dangne/tmd}.
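As a rough illustration of the projection idea, the sketch below approximates the natural-embedding manifold with a small autoencoder trained on natural-text embeddings and pulls a (possibly adversarial) embedding through it before classification. The autoencoder stand-in, the dimensions, and the project_then_classify helper are illustrative assumptions, not the paper's actual generative model; see the linked repository for the real implementation.

```python
# Minimal sketch of manifold projection, assuming an autoencoder stand-in
# for the approximated embedding manifold (hypothetical components).
import torch
import torch.nn as nn

class EmbeddingAutoencoder(nn.Module):
    """Toy approximation of the natural-embedding manifold."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU())
        self.decoder = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def project_then_classify(embedding, manifold_model, classifier):
    # Pull the embedding back toward the manifold learned from natural
    # texts, then classify the projected point instead of the raw one.
    with torch.no_grad():
        projected = manifold_model(embedding)
    return classifier(projected)
```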
For the last two years, from 2020 to 2021, COVID-19 has broken disease prevention measures in many countries, including Vietnam, and negatively impacted various aspects of human life and social communities. Besides, misleading information and fake news about the pandemic circulating in the community are also a serious problem. Hence, we present the first Vietnamese community-based question answering dataset for developing question answering systems for COVID-19, called UIT-ViCoV19QA. The dataset comprises 4,500 question-answer pairs collected from trusted medical sources, with at least one answer and at most four unique paraphrased answers per question. Along with the dataset, we establish various deep learning models as baselines to assess the quality of our dataset and serve as benchmark results for further research through commonly used metrics such as BLEU, METEOR, and ROUGE-L. We also illustrate the positive effects of having multiple paraphrased answers on these models, especially on Transformer, a dominant architecture in this research field.
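To make the evaluation setup concrete, the snippet below shows how a multi-reference metric such as BLEU can score a generated answer against several paraphrased gold answers at once, in the spirit of the protocol described above; the example sentences are invented.

```python
# Multi-reference BLEU scoring with NLTK; each question in the dataset has
# up to four paraphrased gold answers, all passed as references at once.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "wash your hands often with soap for at least 20 seconds".split(),
    "clean your hands frequently using soap and water".split(),
]  # invented stand-ins for paraphrased gold answers
hypothesis = "wash your hands frequently with soap".split()

# sentence_bleu natively supports multiple references per hypothesis.
score = sentence_bleu(references, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```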
With the abundance of image-text pair data and the diversity of vision-and-language (V&L) tasks, scholars have introduced a plethora of deep learning models in this research area. Furthermore, in recent years, transfer learning has also shown tremendous success in computer vision, for tasks such as image classification and object detection, and in natural language processing, for tasks such as question answering and machine translation. Inheriting the spirit of transfer learning, research in V&L has devised a variety of pre-training techniques on large-scale datasets in order to enhance the performance of downstream tasks. The aim of this article is to provide a comprehensive review of contemporary V&L pre-trained models. In particular, we categorize and delineate pre-training approaches, along with a summary of state-of-the-art vision-and-language pre-trained models. Moreover, lists of training datasets and downstream tasks are supplied to further enrich the perspective on V&L pre-training. Finally, we take a further step to discuss numerous directions for future research.
Graph neural networks (GNNs) based on the message passing (MP) paradigm exchange information between 1-hop neighbors to build node representations at each layer. In principle, such networks are not able to capture long-range interactions (LRI) that may be desired or necessary for learning a given task on graphs. Recently, there has been an increasing interest in developing Transformer-based methods for graphs that can consider full node connectivity beyond the original sparse structure, thus enabling the modeling of LRI. However, MP-GNNs that simply rely on 1-hop message passing, when combined with positional feature representations, often perform better on several existing graph benchmarks, thereby limiting the perceived utility and ranking of Transformer-like architectures. Here, we present the Long Range Graph Benchmark (LRGB) with 5 graph learning datasets: PascalVOC-SP, COCO-SP, PCQM-Contact, Peptides-func, and Peptides-struct, which arguably require LRI reasoning to achieve strong performance in a given task. We benchmark both baseline GNNs and Graph Transformer networks to verify that models which capture long-range dependencies perform significantly better on these tasks. Therefore, these datasets are suitable for benchmarking and exploration of MP-GNN and Graph Transformer architectures that are intended to capture LRI.
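As a usage sketch, recent PyTorch Geometric releases ship an LRGBDataset loader covering these five datasets; the snippet below assumes that loader and standard PyG batching.

```python
# Loading one LRGB dataset, assuming the LRGBDataset loader available in
# PyTorch Geometric >= 2.2 (dataset names follow the paper).
from torch_geometric.datasets import LRGBDataset
from torch_geometric.loader import DataLoader

train_set = LRGBDataset(root="data/lrgb", name="Peptides-func", split="train")
loader = DataLoader(train_set, batch_size=32, shuffle=True)

for batch in loader:
    # batch.x holds node features, batch.edge_index the sparse connectivity;
    # the tasks probe whether a model can relate distant nodes.
    print(batch.num_graphs, batch.x.shape)
    break
```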
We propose a recipe on how to build a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks. Graph Transformers (GTs) have gained popularity in the field of graph representation learning with a variety of recent publications, but they lack a common foundation about what constitutes a good positional or structural encoding, and what differentiates them. In this paper, we summarize the different types of encodings with clearer definitions and categorize them as being $\textit{local}$, $\textit{global}$, or $\textit{relative}$. Further, GTs remain constrained to small graphs with a few hundred nodes, and we propose the first architecture with a complexity linear in the number of nodes and edges, $O(N+E)$, by decoupling the local real-edge aggregation from the fully-connected Transformer. We argue that this decoupling does not negatively affect expressivity, with our architecture being a universal function approximator on graphs. Our GPS recipe consists of choosing 3 main ingredients: (i) positional/structural encoding, (ii) local message-passing mechanism, and (iii) global attention mechanism. We build and open-source a modular framework $\textit{GraphGPS}$ that supports multiple types of encodings and that provides efficiency and scalability both on small and large graphs. We test our architecture on 11 benchmarks and show highly competitive results on all of them, showcasing the empirical benefits gained by the modularity and the combination of different strategies.
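A minimal sketch of the layer structure this recipe implies: run a sparse local message-passing step and a dense global attention step in parallel, then combine them. SimpleGPSLayer below is an illustrative simplification, not the GraphGPS reference code; in particular, standard multi-head attention is quadratic, whereas the $O(N+E)$ claim relies on a linear global-attention mechanism, and positional/structural encodings are assumed to be already mixed into the node features.

```python
# Simplified GPS-style layer: local MPNN + global attention in parallel.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class SimpleGPSLayer(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.local_mpnn = GCNConv(dim, dim)            # (ii) local message passing
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, edge_index):
        # x is assumed to already include positional/structural encodings (i).
        local = self.local_mpnn(x, edge_index)          # sparse, O(E)
        h = x.unsqueeze(0)                              # single-graph "batch"
        global_out, _ = self.global_attn(h, h, h)       # (iii) dense attention
        return self.norm(local + global_out.squeeze(0)) # combine both views
```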
Fine-tuning pre-trained language models has achieved great success in many NLP fields. Surprisingly, however, they are vulnerable to adversarial examples; for instance, word substitution attacks using only synonyms can easily fool a BERT-based sentiment analysis model. In this paper, we demonstrate that adversarial training, the prevalent defense technique, does not directly fit the conventional fine-tuning scenario, because it suffers severely from catastrophic forgetting: failing to retain the generic and robust linguistic features already captured by the pre-trained model. In this light, we propose Robust Informative Fine-Tuning (RIFT), a novel adversarial fine-tuning method from an information-theoretical perspective. In particular, RIFT encourages an objective model to retain the features learned from the pre-trained model throughout the entire fine-tuning process, whereas a conventional one only uses the pre-trained weights for initialization. Experimental results show that RIFT consistently outperforms the state of the art on two popular NLP tasks, sentiment analysis and natural language inference, under different attacks across various pre-trained language models.
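A crude sketch of the underlying idea: alongside the task loss on (possibly adversarial) inputs, add a term that keeps the fine-tuned features predictive of the frozen pre-trained features throughout fine-tuning. The InfoNCE-style term below is an illustrative stand-in for RIFT's information-theoretic objective, not its exact formulation.

```python
# Hypothetical RIFT-style objective: task loss plus a contrastive term that
# ties fine-tuned features to the frozen pre-trained features.
import torch
import torch.nn.functional as F

def rift_style_loss(task_logits, labels, feats_ft, feats_pt, temp=0.1):
    task_loss = F.cross_entropy(task_logits, labels)
    # Each fine-tuned feature should match its own pre-trained counterpart
    # more closely than those of other examples in the batch.
    z_ft = F.normalize(feats_ft, dim=-1)
    z_pt = F.normalize(feats_pt, dim=-1).detach()   # pre-trained model frozen
    logits = z_ft @ z_pt.t() / temp
    targets = torch.arange(z_ft.size(0), device=z_ft.device)
    info_loss = F.cross_entropy(logits, targets)
    return task_loss + info_loss
```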
Current state-of-the-art cross-lingual summarization models employ multi-task learning paradigms, which work on a shared vocabulary module and rely on the self-attention mechanism to attend to tokens in two languages. However, correlations learned by self-attention are often loose and implicit, inefficient in capturing crucial cross-lingual representations between languages. The matter worsens when performing on languages with separate morphological or structural features, making the cross-lingual alignment more challenging and resulting in performance drops. To overcome this problem, we propose a novel Knowledge-Distillation-based cross-lingual summarization framework, which seeks to explicitly construct cross-lingual correlations by distilling the knowledge of a monolingual summarization teacher into a cross-lingual summarization student. Since the representations of the teacher and the student lie in two different vector spaces, we further propose a knowledge distillation loss using Sinkhorn Divergence, an Optimal Transport distance, to estimate the discrepancy between those teacher and student representations. Due to the intuitively geometric nature of Sinkhorn Divergence, the student model can productively learn to align its produced cross-lingual hidden states with monolingual hidden states, hence leading to strong correlations between distant languages. Experiments on cross-lingual summarization datasets with pairs of distant languages demonstrate that our method outperforms state-of-the-art models under both high-resource and low-resource settings.
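For reference, a simplified log-domain implementation of the entropic optimal-transport cost underlying Sinkhorn Divergence between student and teacher hidden states could look as follows; the uniform weights, squared-Euclidean cost, and hyperparameters are illustrative choices, not the paper's exact configuration.

```python
# Entropic OT cost via log-domain Sinkhorn iterations, plus the debiased
# Sinkhorn Divergence that subtracts the self-transport terms.
import math
import torch

def sinkhorn_cost(x, y, eps=0.1, n_iters=50):
    # x: (n, d) student hidden states, y: (m, d) teacher hidden states.
    cost = torch.cdist(x, y, p=2) ** 2                 # pairwise transport costs
    n, m = cost.shape
    log_a = torch.full((n,), -math.log(n))             # uniform source weights
    log_b = torch.full((m,), -math.log(m))             # uniform target weights
    f, g = torch.zeros(n), torch.zeros(m)
    for _ in range(n_iters):                           # dual potential updates
        f = -eps * torch.logsumexp(log_b.unsqueeze(0) + (g.unsqueeze(0) - cost) / eps, dim=1)
        g = -eps * torch.logsumexp(log_a.unsqueeze(1) + (f.unsqueeze(1) - cost) / eps, dim=0)
    return (log_a.exp() * f).sum() + (log_b.exp() * g).sum()

def sinkhorn_divergence(x, y, **kw):
    return sinkhorn_cost(x, y, **kw) - 0.5 * (sinkhorn_cost(x, x, **kw)
                                              + sinkhorn_cost(y, y, **kw))
```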
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through its wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset AQSOL, similar to the popular ZINC, but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
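As a small example of the graph positional encoding studied with the benchmark, Laplacian eigenvector PE attaches the first few non-trivial eigenvectors of the normalized graph Laplacian to each node; the sketch below is a minimal NumPy/NetworkX rendition, not the benchmark's own implementation.

```python
# Laplacian eigenvector positional encoding for a small graph.
import numpy as np
import networkx as nx

def laplacian_pe(graph: nx.Graph, k: int = 4) -> np.ndarray:
    lap = nx.normalized_laplacian_matrix(graph).toarray()
    eigvals, eigvecs = np.linalg.eigh(lap)       # eigenvalues in ascending order
    # Skip the trivial eigenvector for eigenvalue 0; eigenvector signs are
    # arbitrary, so implementations typically randomize them during training.
    return eigvecs[:, 1:k + 1]

pe = laplacian_pe(nx.cycle_graph(8), k=2)        # (8, 2) extra node features
```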
Adversarial machine learning has been both a major concern and a hot topic recently, especially with the ubiquitous use of deep neural networks in the current landscape. Adversarial attacks and defenses are usually likened to a cat-and-mouse game in which defenders and attackers evolve over time. On one hand, the goal is to develop strong and robust deep networks that are resistant to malicious actors. On the other hand, in order to achieve that, we need to devise even stronger adversarial attacks to challenge these defense models. Most existing attacks employ a single $\ell_p$ distance (commonly, $p\in\{1,2,\infty\}$) to define the concept of closeness and perform steepest gradient ascent w.r.t. this $p$-norm to update all pixels in an adversarial example in the same way. These $\ell_p$ attacks each have their own pros and cons; and there is no single attack that can successfully break through defense models that are robust against multiple $\ell_p$ norms simultaneously. Motivated by these observations, we come up with a natural approach: combining various $\ell_p$ gradient projections on a pixel level to achieve a joint adversarial perturbation. Specifically, we learn how to perturb each pixel to maximize the attack performance, while maintaining the overall visual imperceptibility of adversarial examples. Finally, through various experiments with standardized benchmarks, we show that our method outperforms most current strong attacks across state-of-the-art defense mechanisms, while keeping its adversarial examples visually clean.
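As a toy rendition of the pixel-level combination, the step below mixes the $\ell_\infty$ sign direction with the normalized $\ell_2$ direction using per-pixel weights. In the method described above those weights would be learned to maximize attack strength; here alpha is simply taken as given, so this is a simplified sketch rather than the actual attack.

```python
# One gradient-ascent step blending l_inf and l_2 directions per pixel.
import torch

def combined_lp_step(x_adv, grad, alpha, step=0.01):
    # l_inf direction: uniform-magnitude sign step for every pixel.
    d_inf = grad.sign()
    # l_2 direction: gradient rescaled to unit l_2 norm per example.
    d_l2 = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    # alpha in [0, 1], same shape as x_adv, mixes the directions pixel-wise;
    # learning alpha is the crux of the proposed method.
    direction = alpha * d_inf + (1 - alpha) * d_l2
    return (x_adv + step * direction).clamp(0, 1)   # keep a valid image
```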
Pareto Front Learning (PFL) was recently introduced as an effective approach to obtain a mapping function from a given trade-off vector to a solution on the Pareto front, which solves the multi-objective optimization (MOO) problem. Due to the inherent trade-off between conflicting objectives, PFL offers a flexible approach in many scenarios in which the decision makers cannot specify the preference of one Pareto solution over another, and must switch between them depending on the situation. However, existing PFL methods ignore the relationship between the solutions during the optimization process, which hinders the quality of the obtained front. To overcome this issue, we propose a novel PFL framework, namely \ourmodel, which employs a hypernetwork to generate multiple solutions from a set of diverse trade-off preferences and enhances the quality of the Pareto front by maximizing the Hypervolume indicator defined by these solutions. Experimental results on several MOO machine learning tasks show that the proposed framework significantly outperforms the baselines in producing the trade-off Pareto front.
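To make the Hypervolume indicator concrete, the sketch below computes it for two minimization objectives as the area dominated by a solution set relative to a reference point; maximizing this quantity jointly rewards convergence to and spread along the front. The sweep-based routine is a standard textbook computation, not code from the paper.

```python
# 2-D hypervolume (minimization): area dominated by the set w.r.t. a
# reference point that every solution should dominate.
import numpy as np

def hypervolume_2d(points: np.ndarray, ref: np.ndarray) -> float:
    pts = points[np.argsort(points[:, 0])]       # sweep by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                         # non-dominated: adds a slab
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = np.array([[0.2, 0.8], [0.5, 0.4], [0.9, 0.1]])
print(hypervolume_2d(front, ref=np.array([1.0, 1.0])))   # 0.39
```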